Note: Episodes listed below are ordered based on how likely they are to match your search request.
"It's this concern that, hey, AI safety is a real thing. And it's not just sort of AI bias or AI replacing jobs. It's also catastrophic risks that might arise from the rapid advancement of AI. Yeah, I think listeners are probably generally familiar with GiveWell, which is this charitable organization that does a lot of research for understanding."
"I'm contributing to the growth of this beautiful machine so that we can seek to the stars. That's really inspiring. That's also a sort of neuro hack. So you're saying AI safety is important to you, but right now, in the landscape of ideas you see, AI safety as a topic is used more often to gain centralized control. So in that sense, you're resisting it as a proxy for gaining centralized control."
"Any significant advances in AI safety research could inform researchers working on the most powerful models. Other countries might also develop liability standards for the creators of AI systems that could incentivize corporations to proceed cautiously before releasing models. And at some point, there may be AI treaties and international regulations. Just as the international community has created the International Atomic Energy Agency, the Biological Weapons Convention, and the Intergovernmental Panel on Climate Change to coordinate around and mitigate other global catastrophic threats, efforts to coordinate governments around the world to understand and share information about threats posed by AI may end up being extremely important in some future scenario. The Organisation for Economic Co-operation and Development, for instance, has already created the AI Policy Observatory."
"It's not going to violate my privacy, it's not going to make judgments on me that could greatly negatively impact my life. But in the paper, it explicitly says that none of these things are what they mean by safety. It just refers to, for them, essentially preventing AI from becoming rogue and misaligned. Of course, because of this confusion, companies can say we are investing a lot in AI safety. We care a lot about keeping AI safe, and that does a lot of work for them in the public."
"Is there something you could say about ideas you have, having thought about this problem of AI safety: how to release such systems, how to test such systems when you have them inside the company?"
"Okay? That's what AI safety is. AI safety is, can you turn it off? Can you kill it? Can you stop it from controlling drones?"
"Yeah, so while patient data is certainly one of the big areas that people are talking about with AI, looking at it from a clinical engineering standpoint can provide both technical and administrative benefits to those clinical engineering teams. AI can be incorporated into tools to monitor the performance of medical devices, potentially catching early warning signs of a device failure and preventing those failures. On large capital equipment such as a CT scanner, that can be a real cost-saving benefit for a health system: it can decrease unplanned outages, improve utilization time, and of course, ensure patient safety. AI can also be used to help those biomedical technicians by taking over some of the administrative process. So if we think about automating tasks like documentation in work orders, we know today that something like ChatGPT can take this podcast and create a summary."
"One of the reasons I'm so interested in AI safety standards is because it is kind of no matter what risk you're worried about, I think you hopefully should be able to get on board with the idea that you should measure the risk and not unwittingly deploy AI systems that are carrying a ton of the risk before you've at least made a deliberate, informed decision to do so. And I think if we do that, we can anticipate a lot of different risks and stop them from coming at us too fast. Too fast is the central theme for me. A common story in some corners of this discourse is this idea of an AI, that it's this kind of simple computer program and it rewrites its own source code and that's where all the action is. I don't think that's exactly the picture I have in mind, although there's some similarities."
"I mean, now the model is laughable. They used AI safety to hype up their company and it's disgusting. Or the flip side of that is they used a relatively weak model in retrospect to explore how do we do AI safety correctly? How do we release things? How do we go through the process?"
"This kind of security mindset research is critical because in all likelihood, closing the capabilities-robustness gap will require both conceptual breakthroughs and a lot of diligent work. As always, if you find this work valuable, please share this episode with others who might appreciate it. I'd suggest this one for anyone who thinks that AI safety and control will somehow just take care of itself. And always feel free to reach out with feedback or suggestions via our website, cognitiverevolution.ai, or by messaging me on your favorite social network. Now, please enjoy this overview of the current state of AI robustness, safety, and control with Adam Gleave of FAR AI."
"To help protect users, we encourage AI companies to provide two levels of protection to research. First, a legal safe harbor would indemnify good-faith independent AI safety, security, and trustworthiness research, provided it is conducted in accordance with well-established vulnerability disclosure rules. Second, companies should commit to more equitable access by using independent reviewers to moderate researchers' evaluation applications, which would protect rule-abiding safety research from counterproductive account suspensions and mitigate the concern of companies selecting their own evaluators. While these basic commitments will not solve every issue surrounding responsible AI today, they are an important first step on the long road toward building and evaluating AI in the public interest. So this is interesting because this is not a request for full openness; it's a request for, effectively, an openness carve-out even within closed systems."
"So first of all, let's talk about the purpose of this group and then we'll get into some of the critiques. The Wall Street Journal writes, the AI Safety and Security Board is part of a government push to protect the economy, public health, and vital industries from being harmed by AI powered threats. Working with the Department of Homeland Security, it will develop recommendations for power grid operators, transportation service providers, and manufacturing plants, among others, on how to use AI while bulletproofing their systems against potential disruption that could be caused by advances in the technology. Now, one of the big things that you'll see a lot of critiques focus in on is represented here by saliferous games, who writes, this is such blatant conflict of interest. They're letting the AI industry write the laws for the AI industry."
"This week, companies that make artificial intelligence tools, including OpenAI, Meta, and Google, agreed to incorporate new safety measures. The goal is to protect children from exploitation and plug holes in their current defenses. The problem is the tools that create AI-generated images can make innocuous stuff, but they can also create harmful things like sexualized images of children. A new alliance led by a nonprofit called Thorn is leading the charge. WSJ tech reporter Deepa Seetharaman joins me now with more."
"Yeah, this will get, again, more and more complicated and sophisticated. And for people who say, no, it's not going to happen, the question is, what is stopping it? In all the discussions about AI, the kind of danger that draws people's attention, the poster child of AI dangers, is things like AI creating a new virus that kills billions of people, a new pandemic. So a lot of people are concerned about how we prevent an AI by itself, or maybe some small terrorist organization, or even a 16-year-old teenager giving an AI the task to create a dangerous virus and release it to the world. How do we prevent this? And this is a serious concern, and we should be concerned about it, but it gets a lot more attention than the question, how do we prevent the financial system from becoming so complicated that humans can no longer understand it?"
"Back to the OpenAI safety plan. So I spoke with Jan about this last year. Some listeners will have heard that interview. And so one of your concerns was by the time you have an AI system that's capable of meaningfully doing alignment work and helping to solve the problem that you're concerned about, it should be equally good, maybe better at just doing general AI research and making AI more capable. Whether or not that system is going to be aligned or safe or anything like that, that's an incredibly combustible situation because you're already right at the cusp or in the process of setting off a positive feedback loop where the AI is reprogramming itself, making itself more capable, and then turning that again towards making itself more capable."
"Those saying we should be concerned point out that AI is becoming increasingly good at doing intellectual work and research. As AI technology advances, it could begin doing the work of biologists and allow a small group of terrorists to design viruses and unleash them. Those saying we shouldn't be concerned say that AI is not that capable right now. All the information AI is providing currently is information publicly available on the Internet. They say worrying about speculative risks is a distraction."
"Yeah. And I agree. I don't know, I'm a little bit skeptical about AI safety, but just as a thought experiment, AI could be the technology that, if you democratize it, could just have catastrophic, unforeseen consequences. And then, as you were just alluding to, if you place constraints on the system, then you have all of the problems that you've spent your career talking about. So it's almost like Pandora's box, that this is the one technology that kind of breaks everything."
"So if you're resistant to one, what is the right next treatment that you can get onto? Once a trial has all its participants, AI yet again comes into play, helping to make that trial itself run more smoothly. AI can be used to help monitor patients in real time, track symptoms and health state, and identify potential adverse events early. And these are just some points of data a trial investigator might be considering. They might also be electronic medical records, genomic biomarker data, radiology images, voice recordings describing symptoms."
"So everybody's invested, and who's invested in this is kind of known; that's the deal. But then they have this additional thing that they are very vocal and public about, which is this idea of AI safety. And this idea, the original concern, of is AI going to basically wake up and destroy everything, take over and wipe out humanity, or just cause damage in one of a thousand other ways. And so they also have this thing built into their structure which basically says if they conclude AI is too dangerous, they'll basically just shut the whole thing down."
"Like trying to understand chemistry, materials, great stuff. Nuclear fusion, carbon capture, biology, and so on. I'm actually mostly not working on anthropomorphic AI. I would say physics-based AI is like an extension of our capabilities. And to me, it's kind of funny that the organizations, maybe not yours, but the organizations that are pushing for AI safety are the ones causing the disruption. 100% agree with that, by the way."